
    Linguistics for the Age of AI

    A human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems. One of the original goals of artificial intelligence research was to endow intelligent agents with human-level natural language capabilities. Recent AI research, however, has focused on applying statistical and machine learning approaches to big data rather than attempting to model what people do and how they do it. In this book, Marjorie McShane and Sergei Nirenburg return to the original goal of recreating human-level intelligence in a machine. They present a human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems that emphasizes meaning—the deep, context-sensitive meaning that a person derives from spoken or written language. With Linguistics for the Age of AI, McShane and Nirenburg offer a roadmap for creating language-endowed intelligent agents (LEIAs) that can understand, explain, and learn. They describe the language-understanding capabilities of LEIAs from the perspectives of cognitive modeling and system building, emphasizing "actionability"—which involves achieving interpretations that are sufficiently deep, precise, and confident to support reasoning about action. After detailing their microtheories for topics such as semantic analysis, basic coreference, and situational reasoning, McShane and Nirenburg turn to agent applications developed using those microtheories and to evaluations of a LEIA's language understanding capabilities. McShane and Nirenburg argue that the only way to achieve human-level language understanding by machines is to place linguistics front and center, using statistics and big data as contributing resources. They lay out a long-term research program that addresses linguistics and real-world reasoning together, within a comprehensive cognitive architecture.

    Semantic Classification for Practical Natural Language Processing

    In the field of natural language processing (NLP) there is now a consensus that all NLP systems that seek to represent and manipulate meanings of texts need an ontology, that is, a taxonomic classification of concepts in the world to be used as semantic primitives. In our continued efforts to build a multilingual knowledge-based machine translation (KBMT) system using an interlingual meaning representation, we have developed an ontology to facilitate natural language interpretation and generation. The central goal of the Mikrokosmos project is to develop a computer system that produces a comprehensive Text Meaning Representation (TMR) for an input text in any of a set of source languages. Knowledge that supports this process is stored both in language-specific knowledge sources (such as a lexicon) and in an independently motivated, language-neutral ontology of concepts in the world.
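    The division of labor described above—a language-specific lexicon mapping words to concepts in a language-neutral, taxonomically organized ontology—can be illustrated with a minimal sketch. This is not Mikrokosmos code; the concept names, is-a links, and the toy_tmr function are illustrative assumptions only, standing in for a far richer semantic analysis.

    ```python
    # Toy sketch of an ontology-backed meaning representation.
    # ONTOLOGY encodes is-a links between language-neutral concepts;
    # LEXICON is a (language-specific) mapping from word forms to concepts.
    # All names here are hypothetical, not drawn from Mikrokosmos itself.

    ONTOLOGY = {
        "DOG": "ANIMAL",
        "CAT": "ANIMAL",
        "ANIMAL": "PHYSICAL-OBJECT",
        "PHYSICAL-OBJECT": "ALL",
        "INGEST": "EVENT",
        "EVENT": "ALL",
    }

    LEXICON = {
        "dog": "DOG",
        "cat": "CAT",
        "eats": "INGEST",
        "ate": "INGEST",
    }

    def ancestors(concept):
        """Walk is-a links up to the root, returning the concept's lineage."""
        chain = []
        while concept in ONTOLOGY:
            concept = ONTOLOGY[concept]
            chain.append(concept)
        return chain

    def toy_tmr(sentence):
        """Map each known word to its ontology concept -- a crude stand-in
        for semantic analysis producing a text meaning representation."""
        return {w: LEXICON[w] for w in sentence.lower().split() if w in LEXICON}

    print(toy_tmr("The dog ate"))   # {'dog': 'DOG', 'ate': 'INGEST'}
    print(ancestors("DOG"))         # ['ANIMAL', 'PHYSICAL-OBJECT', 'ALL']
    ```

    The point of the sketch is the architecture, not the coverage: swapping in a lexicon for another language leaves the ontology, and hence the resulting representation, unchanged—the interlingual property the abstract describes.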

    Learning Components of Computational Models from Texts

    The mental models of experts can be encoded in computational cognitive models that can support the functioning of intelligent agents. This paper compares human mental models to computational cognitive models, and explores the extent to which the latter can be acquired automatically from published sources via learning by reading. It suggests that although model components can be automatically learned, published sources lack sufficient information for the compilation of fully specified models that can support sophisticated agent capabilities, such as physiological simulation and reasoning. Such models require hypotheses and educated guessing about unattested phenomena, which can be provided only by humans and are best recorded using knowledge engineering strategies. This work merges past work on cognitive modeling, agent simulation, learning by reading, and narrative structure, and draws examples from the domain of clinical medicine.

    Machine translation of natural languages
